%%HP: T(3)A(R)F(.);
@ Author: Ted A Smith
@ Date: Sun July 21, 1991
@
@ Here is my second cut at eigenvalue/eigenvector decomposition
@ which is not restricted to real symmetric input matrices. These
@ routines have been optimized (and their accuracy improved) compared
@ to my first try, dated Jul 18 1991.
@
@ Checksum of this directory on the stack: #2B90h, size 2982.5
@ Checksum with only Eigen, EFn1 and EFn2: #F1BBh, size 1103.5
@
@ The general eigenvalue/eigenvector problem is hard and there are
@ many possible pathologies if the input matrix is not real symmetric.
@ (For theory and examples see the references below.)
@
@ The directory consists of 3 main routines:
@
@ Eigen   Given M, returns V and D, where the columns of V are the
@         eigenvectors and D is the array of eigenvalues
@ EFn1    Given M and f, returns f(M), defined as V*f(D)*V^-1
@ EFn2    Given V, D and f, returns V*f(D)*V^-1
@
@ For example, in analogy with SIN(x)^2+COS(x)^2=1:
@
@ A Eigen DUP2 \<< SIN \>> EFn2 DUP *
@ ROT ROT \<< COS \>> EFn2 DUP * + A IDN - ABS
@
@ will return a small number for almost any square input matrix A.
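For readers without an HP 48 at hand, the same identity test can be sketched in Python with numpy (an illustrative translation, not part of this directory; `np.linalg.eig` stands in for Eigen, and the hypothetical helper `matfun` plays the role of EFn1):

```python
import numpy as np

def matfun(M, f):
    # If M = V*diag(d)*V^-1, then f(M) = V*diag(f(d))*V^-1,
    # which is what EFn1/EFn2 compute on the HP 48.
    d, V = np.linalg.eig(M)
    return V @ np.diag(f(d)) @ np.linalg.inv(V)

# SIN(A)^2 + COS(A)^2 should be (numerically) the identity matrix.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 5.0],
              [3.0, 5.0, 6.0]])   # example matrix A from this directory
resid = matfun(A, np.sin) @ matfun(A, np.sin) \
      + matfun(A, np.cos) @ matfun(A, np.cos) - np.eye(3)
print(np.linalg.norm(resid))     # a small number
```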
@
@ Eigen also displays the matrix after each rotation if user flag 1 is
@ set. Two comments in the code indicate which lines to delete if this
@ feature is not desired.
@
@ There are two test routines:
@
@ Tst     Calculate ABS(DET(M-D*I)) and ABS(M-EFn1(M, \<< \>>))
@ Tst1    Apply the SIN(x)^2+COS(x)^2=1 test to a random matrix.
@
@ There are 9 example matrices A, B, C, D, E, F, G, H and J, as well as
@ the 3x3 identity matrix I. The comments in the code give the true
@ eigenvalues for some of these matrices.
@
@ The theory of operation follows:
@
@ The QR algorithm is, in theory, a better method, but it is much
@ more complicated. The virtue of this implementation is its simplicity.
@ It is based on the Jacobi method, which uses trigonometric rotations to
@ successively zero the off-diagonal elements of the input matrix. When
@ the off-diagonal elements are close enough to zero, the diagonal
@ elements approximate the eigenvalues of the original matrix. In
@ addition, accumulating the product of the rotations gives the matrix of
@ eigenvectors. On real symmetric matrices the Jacobi method is
@ idiot-proof.
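As a reference point, here is a minimal Python/numpy sketch of the classical cyclic Jacobi iteration just described, restricted to real symmetric input (the HP 48 code below handles complex matrices; the function name, tolerance and sweep limit are illustrative choices):

```python
import math
import numpy as np

def jacobi_sweeps(M, tol=1e-12, max_sweeps=50):
    # Cyclic Jacobi for a real symmetric matrix: at each (i, j) apply a
    # rotation that zeros the off-diagonal pair, accumulating the
    # rotations in V so that M = V * diag(eigenvalues) * V^T.
    A = np.array(M, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = math.sqrt(max(0.0, np.sum(A**2) - np.sum(np.diag(A)**2)))
        if off < tol:
            break
        for i in range(n - 1):
            for j in range(i + 1, n):
                # t = ATAN((b+c)/(a-d))/2, as in the text (here b = c)
                t = 0.5 * math.atan2(A[i, j] + A[j, i], A[i, i] - A[j, j])
                c, s = math.cos(t), math.sin(t)
                R = np.eye(n)
                R[i, i] = R[j, j] = c
                R[i, j], R[j, i] = s, -s
                A = R @ A @ R.T        # full products for clarity, not speed
                V = V @ R.T
    return np.diag(A), V               # eigenvalues, eigenvector columns
```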
@
@ I have extended the algorithm by using hyperbolic "rotations" to
@ drive the off-diagonal elements toward each other, thereby allowing the
@ trigonometric transform to be more effective. I have no proof that this
@ converges! I have not been able to find a reference for this method.
@ The idea of using hyperbolic transforms was inspired by chapter 17 of
@ reference 3, but this implementation bears no other obvious similarity.
@
@ To illustrate the method let's apply it to the 2 x 2 matrix:
@
@ [[a b]
@ [c d]]
@
@ The standard Jacobi rotation is:
@
@ [[ cos(t) sin(t) ] [[a b] [[ cos(t) -sin(t) ]
@ [-sin(t) cos(t) ]] [c d]] [ sin(t) cos(t) ]]
@
@ where t=ATAN((b+c)/(a-d))/2.
@
@ For real a, b, c and d with b=c this returns:
@
@ / a+d+SQRT(amd^2+4*c^2)*SIGN(amd)                                   \
@ | -------------------------------                 0                 |
@ |                2                                                  |
@ |                                  a+d-SQRT(amd^2+4*c^2)*SIGN(amd)  |
@ |                0                 -------------------------------  |
@ \                                                 2                 /
@
@ where amd=a-d.
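In Python, this closed form can be checked directly (a hypothetical worked example with b = c, so all quantities stay real):

```python
import math

# Hypothetical symmetric 2x2 input (b = c)
a, b, c, d = 5.0, 2.0, 2.0, 1.0
t = 0.5 * math.atan((b + c) / (a - d))   # the rotation angle above
ct, st = math.cos(t), math.sin(t)

# [[ct st] [-st ct]] * [[a b] [c d]] * [[ct -st] [st ct]]
r00 = (ct*a + st*c)*ct + (ct*b + st*d)*st
r01 = (ct*a + st*c)*(-st) + (ct*b + st*d)*ct
r10 = (-st*a + ct*c)*ct + (-st*b + ct*d)*st
r11 = (-st*a + ct*c)*(-st) + (-st*b + ct*d)*ct

# The diagonal entries from the displayed formula, SIGN via copysign
amd = a - d
lam_plus  = (a + d + math.sqrt(amd**2 + 4*c*c)*math.copysign(1.0, amd)) / 2
lam_minus = (a + d - math.sqrt(amd**2 + 4*c*c)*math.copysign(1.0, amd)) / 2
```

The off-diagonals r01 and r10 come out (numerically) zero and the diagonal matches the formula.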
@
@ To drive the off-diagonal elements together we will first apply the
@ hyperbolic transform:
@
@ [[ cosh(t) sinh(t) ] [[a b] [[ cosh(t) -sinh(t) ]
@ [ sinh(t) cosh(t) ]] [c d]] [-sinh(t) cosh(t) ]]
@
@ where t=ATANH((b-c)/(a-d))/2.
@
@ Once again, for real inputs (with fortunate signs) this returns:
@
@ / a+d+SQRT(amd-bmc)*SQRT(amd+bmc)                b+c                 \
@ | -------------------------------                ---                 |
@ |                2                                2                  |
@ |                                                                    |
@ |               b+c                a+d+SQRT(amd-bmc)*SQRT(amd+bmc)   |
@ |               ---                -------------------------------   |
@ \                2                               2                   /
@
@ where amd=a-d and bmc=b-c.
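This step can also be checked numerically (a hypothetical worked example with |b-c| < |a-d| so that t is real):

```python
import math

# Hypothetical nonsymmetric 2x2 input with |b-c| < |a-d|
a, b, c, d = 5.0, 2.0, 1.0, 1.0
t = 0.5 * math.atanh((b - c) / (a - d))
ch, sh = math.cosh(t), math.sinh(t)

# [[ch sh] [sh ch]] * [[a b] [c d]] * [[ch -sh] [-sh ch]]
r00 = (ch*a + sh*c)*ch + (ch*b + sh*d)*(-sh)
r01 = (ch*a + sh*c)*(-sh) + (ch*b + sh*d)*ch
r10 = (sh*a + ch*c)*ch + (sh*b + ch*d)*(-sh)
r11 = (sh*a + ch*c)*(-sh) + (sh*b + ch*d)*ch

amd, bmc = a - d, b - c
```

Both off-diagonals become (b+c)/2 and the diagonal matches the displayed formula.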
@
@ The transforms also work with complex numbers; however, the on-diagonal
@ results become much messier.
@
@ To save matrix math the code utilizes one transform which is the
@ product of the trig and hyperbolic transforms:
@
@ [[ cb*ca+sb*sa cb*sa+ca*sb ] [[a b] [[ cb*ca-sb*sa -cb*sa-ca*sb ]
@ [ cb*sa-ca*sb cb*ca-sb*sa ]] [c d]] [ ca*sb-cb*sa cb*ca+sb*sa ]]
@
@ where cb=COS(beta), sb=SIN(beta), ca=COSH(alpha) and sa=SINH(alpha)
@ and alpha=ATANH(bmc/amd)/2
@ and beta=ATAN((b+c)/(amd*SQRT(1-bmc/amd)*SQRT(1+bmc/amd)))/2
@ and amd=a-d and bmc=b-c.
@
@ (The seemingly redundant math assures that signs, etc. are correct for
@ complex inputs.)
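The combined transform can likewise be verified in Python: applied to a real 2 x 2 matrix with |bmc| < |amd|, it diagonalizes in a single step (a hypothetical worked example; the eigenvalues of [[5 2][1 1]] are 3+SQRT(6) and 3-SQRT(6)):

```python
import math

# Hypothetical 2x2 input with |b-c| < |a-d|, so alpha and beta are real
a, b, c, d = 5.0, 2.0, 1.0, 1.0
amd, bmc = a - d, b - c
alpha = 0.5 * math.atanh(bmc / amd)
beta = 0.5 * math.atan((b + c) /
        (amd * math.sqrt(1 - bmc/amd) * math.sqrt(1 + bmc/amd)))
ca, sa = math.cosh(alpha), math.sinh(alpha)
cb, sb = math.cos(beta), math.sin(beta)

# Left and right factors exactly as written in the comment above
L = [[cb*ca + sb*sa,  cb*sa + ca*sb],
     [cb*sa - ca*sb,  cb*ca - sb*sa]]
R = [[cb*ca - sb*sa, -cb*sa - ca*sb],
     [ca*sb - cb*sa,  cb*ca + sb*sa]]
M = [[a, b], [c, d]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = mul(mul(L, M), R)
# Off-diagonals are driven to (numerical) zero; the diagonal
# holds the eigenvalues of M.
```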
@
@ I had implemented the calculation of the xform matrices without
@ using explicit trig and hyperbolic functions, but getting all of the
@ edge conditions right is a nightmare (at least for me). Since those
@ implementations sped up the aggregate algorithm by only 2-3%, I
@ opted for simplicity.
@
@ Known limitations:
@ Defective matrices cannot be eigenvalue/eigenvector decomposed.
@ Hence, this and any other implementations are doomed in the general
@ case. (See ref 2 pages 353... for an explanation.)
@
@ Although it would be nice, I do not yet perform orthogonalization of
@ the resultant eigenvectors, so matrices with repeated eigenvalues
@ usually do not yield orthogonal eigenvectors.
@
@ If the input matrix is Real Symmetric, I still use complex math.
@ There is a possibility that roundoff errors will cause complex
@ numbers to spontaneously erupt, causing an error if I were to
@ try to stuff them into a real array. Try example matrix H.
@ It would be nice to take advantage of real only math to speed
@ up the algorithm.
@
@ Certain input matrices will contain values which will cause the trig
@ xform to attempt to evaluate ATAN(i) or the hyperbolic xform to
@ evaluate ATANH(1). This indicates that no possible rotation will
@ work in that context. Since no rotation will work, I do no
@ rotation. If no other rotation happens before we get back to this
@ same position on the next pass, we won't converge. Example matrix
@ C exhibits this problem during the first pass when rotating at
@ position 1, 2.
@
@ Suspected problems:
@ The test for termination is a hack: I quit when ABS(M) is not much
@ different from ABS(diag(M)). It seems to work fine.
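That criterion can be paraphrased in Python/numpy as follows (an illustrative sketch; `converged` and its tolerance are stand-ins for the %CH test in Eigen below, and the Frobenius norm plays the role of the HP 48 ABS):

```python
import numpy as np

def converged(M, tol=1e-9):
    # Stop when ABS(M) (the Frobenius norm) is not much different from
    # ABS(diag(M)), i.e. the off-diagonal mass has become negligible.
    total = np.linalg.norm(M)
    diag = np.linalg.norm(np.diag(M))
    return abs(diag - total) <= tol * total
```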
@
@
@ References:
@
@ 1) Matrix Computations, 2nd Edition, 1989
@ Golub, Gene H. and Van Loan, Charles F.
@ The Johns Hopkins University Press
@ 701 West 40th Street
@ Baltimore, Maryland 21211
@ ISBN 0-8018-3772-3
@ ISBN 0-8018-3772-1 (pbk.)
@
@ 2) Numerical Recipes (in C, Fortran or Pascal)
@ The Art of Scientific Computing
@ Press, William H., Flannery, Brian P.,
@ Teukolsky, Saul A. and Vetterling, William T.
@ Cambridge University Press
@ 510 North Avenue
@ New Rochelle, NY 10801
@ ISBN 0-521-35465-X
@
@ 3) Linear Algebra, vol. II of Handbook for Automatic Computation, 1971
@ Wilkinson, J.H. and Reinsch, C.
@ Springer-Verlag
@ New York
DIR
Eigen
\<<
1 IF FS? THEN CLLCD END @ Nuke this line for no display
RCLF SWAP -55 SF -21 CF -22 SF
DUP RE SWAP IM R\->C
DUP IDN SWAP DUP SIZE 1 GET \-> d
\<<
DO
1 d 1 -
FOR i
i 1 + d
FOR j
1 IF FS? THEN DUP 1 DISP END @ Nuke this line for no display
DUP i j 2 \->LIST GET
OVER j i 2 \->LIST GET
DUP2 + ROT ROT -
3 PICK i DUP 2 \->LIST GET
4 PICK j DUP 2 \->LIST GET -
SWAP OVER IFERR / THEN 0 END ROT ROT
1 4 PICK DUP2 + \v/
ROT ROT - \v/ * * IFERR / THEN 0 END ATAN 2 /
IF DUP ABS 8 > THEN DROP 0 END
DUP COS SWAP SIN ROT ATANH 2 /
IF DUP ABS 8 > THEN DROP 0 END
DUP COSH SWAP SINH
5 PICK IDN DUP \-> T TT
\<<
4 DUPN
SWAP 4 ROLL * SWAP ROT * DUP2 + DUP
'T(i,i)' STO
'TT(j,j)' STO
- DUP
'T(j,j)' STO
'TT(i,i)' STO
4 ROLL * SWAP ROT * DUP2 + DUP
'T(i,j)' STO
NEG 'TT(i,j)' STO
- DUP
'T(j,i)' STO
NEG 'TT(j,i)' STO
T ROT * T ROT * TT *
\>>
NEXT
NEXT
UNTIL 0
1 d
FOR i
OVER { i i } GET DUP CONJ * RE +
NEXT \v/
OVER ABS %CH ABS .000000001 \<=
END
1 d
FOR i
DUP i DUP 2 \->LIST GET SWAP
NEXT DROP
d \->ARRY
\>> ROT STOF
\>>
EFn1 @ Apply function to arbitrary matrix
\<< SWAP Eigen ROT EFn2 \>>
EFn2 @ Apply function f to decomposed matrix
\<< \-> v f @ If M=V*D*V^-1, F(M) is V*F(D)*V^-1
\<< DUP 0 CON
1 OVER SIZE 1 GET @ Loop over the elements of D
FOR i
{ i i } v i GET @ Eval f at each element of D
f EVAL PUT @ And build F(D)
NEXT OVER / SWAP * @ Do V*F(D)*V^-1
\>>
\>>
Tst @ Check Eigen(M) three ways
\<<
DUP Eigen DUP SIZE 1 GET
\-> M T D d
\<< 1 d @ Test Eigenvalues by doing DET(M-v*I)
FOR i @ for each eigenvalue v
M DUP IDN D i GET * - DET
NEXT
d \->ARRY ABS "Det" \->TAG
T DUP TRN CONJ * DUP IDN - @ Check orthogonality of eigenvectors
ABS "Orth" \->TAG @ with ABS of DOT of all pairings
M T D \<< \>> EFn2 - ABS @ Check eigenvectors with
"Res" \->TAG 3 \->LIST @ ABS(M - V * [v] * INV(V))
\>>
\>>
Tst1 @ Test by calculating SIN(M)^2+COS(M)^2
\<< @ for a random matrix M. The result
RAND RAND RAND RAND RAND @ should be close to the identity
RAND RAND RAND RAND @ matrix.
{ 3 3 } \->ARRY
RAND RAND RAND RAND RAND
RAND RAND RAND RAND
{ 3 3 } \->ARRY R\->C
DUP Eigen DUP2
\<< SIN \>> EFn2 DUP *
ROT ROT
\<< COS \>> EFn2 DUP * +
\>>
I
[[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]]
A
[[ 1 2 3 ] @ 11/3-2*SQRT(133)*SIN(ATAN(433*SQRT(3)/117)/3)/3
[ 2 4 5 ] @ 11/3+2*SQRT(133)*SIN(ATAN(433*SQRT(3)/117)/3+pi/3)/3
[ 3 5 6 ]] @ 11/3-2*SQRT(133)*COS(ATAN(433*SQRT(3)/117)/3+pi/6)/3
B
[[ 2 0 3 ] @ Exact eigenvalues are 0, 2 and 5
[ -1 -1 -3 ]
[ 1 3 6 ]]
C
[[ 1 0 .01 ] @ Eigenvalues are 11/10
[ .1 1 0 ] @ 19/20-SQRT(3)*i/20
[ 0 1 1 ]] @ 19/20+SQRT(3)*i/20
D
[[ -261 209 -49] @ Eigenvalues are 3, 4 and 10
[ -530 422 -98]
[ -800 631 -144]]
E
[[ (1,1) (0,0) (.01,.01) ] @ Eigenvalues are (11/10)*(1+i)
[ (.1,.1) (1,1) (0,0) ] @ (19/20-SQRT(3)*i/20)*(1+i)
[ (0,0) (1,1) (1,1) ]] @ (19/20+SQRT(3)*i/20)*(1+i)
F
[[ (0,5) (1,2) (2,3) (-3,6) (6,0) ]
[ (-1,2) (0,6) (4,5) (-3,-2) (5,0) ]
[ (-2,3) (-4,5) (0,7) (3,0) (2,0) ]
[ (3,6) (3,-2) (-3,0) (0,-5) (2,1) ]
[ (-6,0) (-5,0) (-2,0) (-2,1) (0,2) ]]
G
[[ 1 2 0 0 ]
[ 2 3 4 0 ]
[ 0 4 5 6 ]
[ 0 0 6 7 ]]
H
[[ -4 1 1 1 ] @ Eigenvalues are 1, 5, 5 and 5
[ 1 -4 1 1 ]
[ 1 1 -4 1 ]
[ 1 1 1 -4 ]]
J
[[ 1 -1 0 0 ]
[ -1 2 -1 0 ]
[ 0 -1 3 -1 ]
[ 0 0 -1 4 ]]
END
@ Ted A Smith
@ PO Box 6308
@ Longmont CO 80501
@ H) (303)651-2092
@ W) (303)665-5492
@ Jul 20, 1991
@ akcs.tasmith@@hpcvbbs.hp.cv.com